'Weak Dependency Graph [60.0]'
------------------------------
Answer:           YES(?,O(n^1))
Input Problem:    innermost runtime-complexity with respect to
  Rules:
    {  dx(X) -> one()
     , dx(a()) -> zero()
     , dx(plus(ALPHA, BETA)) -> plus(dx(ALPHA), dx(BETA))
     , dx(times(ALPHA, BETA)) ->
       plus(times(BETA, dx(ALPHA)), times(ALPHA, dx(BETA)))
     , dx(minus(ALPHA, BETA)) -> minus(dx(ALPHA), dx(BETA))
     , dx(neg(ALPHA)) -> neg(dx(ALPHA))
     , dx(div(ALPHA, BETA)) ->
       minus(div(dx(ALPHA), BETA),
             times(ALPHA, div(dx(BETA), exp(BETA, two()))))
     , dx(ln(ALPHA)) -> div(dx(ALPHA), ALPHA)
     , dx(exp(ALPHA, BETA)) ->
       plus(times(BETA, times(exp(ALPHA, minus(BETA, one())), dx(ALPHA))),
            times(exp(ALPHA, BETA), times(ln(ALPHA), dx(BETA))))}
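
To make the rule set concrete, the following minimal sketch (Python; the term
encoding and helper names are editorial assumptions, not part of the problem)
reads the structural dx-rules as a recursive program.  Each call corresponds
to one dx-rewrite step and recurses only into proper subterms of its argument,
so an innermost derivation starting from dx(t) performs at most one dx-step
per symbol of t, in line with the linear bound reported above.

def dx(t, steps):
    """One dx-rewrite step per call; terms are nested tuples/strings."""
    steps[0] += 1
    if isinstance(t, str):                     # a variable: dx(X) -> one()
        return ("one",)
    f, *args = t
    if f == "plus":
        return ("plus", dx(args[0], steps), dx(args[1], steps))
    if f == "times":
        a, b = args
        return ("plus", ("times", b, dx(a, steps)),
                        ("times", a, dx(b, steps)))
    if f == "minus":
        return ("minus", dx(args[0], steps), dx(args[1], steps))
    if f == "neg":
        return ("neg", dx(args[0], steps))
    if f == "div":
        a, b = args
        return ("minus", ("div", dx(a, steps), b),
                         ("times", a, ("div", dx(b, steps),
                                              ("exp", b, ("two",)))))
    if f == "ln":
        return ("div", dx(args[0], steps), args[0])
    if f == "exp":
        a, b = args
        return ("plus",
                ("times", b, ("times", ("exp", a, ("minus", b, ("one",))),
                                       dx(a, steps))),
                ("times", ("exp", a, b), ("times", ("ln", a), dx(b, steps))))
    # dx(a()) -> zero(); the overlapping rule dx(X) -> one() could also cut
    # any branch short, but the structural rules give the longest derivations
    return ("zero",) if f == "a" else ("one",)

steps = [0]
dx(("times", ("a",), ("ln", ("a",))), steps)
print(steps[0])        # 4 dx-steps for the 4 symbols of times(a, ln(a))
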

Details:         
  We have computed the following set of weak (innermost) dependency pairs:
   {  dx^#(X) -> c_0()
    , dx^#(a()) -> c_1()
    , dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
    , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
    , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
    , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
    , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
    , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
    , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
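
  As a rough illustration of how these pairs arise, the sketch below (Python;
  the encoding and names are editorial assumptions) marks the defined root
  symbol dx as dx^# and replaces each right-hand side by a fresh compound
  symbol c_i applied to the marked calls to defined symbols occurring in it.

DEFINED = {"dx"}                     # the only defined symbol of the TRS

def marked_calls(t):
    """Subterms of t rooted in a defined symbol, with that root marked."""
    if isinstance(t, str):           # a variable such as "ALPHA"
        return []
    f, *args = t
    if f in DEFINED:
        return [(f + "^#", *args)]
    return [m for a in args for m in marked_calls(a)]

def weak_dp(i, lhs, rhs):
    f, *args = lhs
    return ((f + "^#", *args), ("c_%d" % i, *marked_calls(rhs)))

# e.g. the rule dx(plus(ALPHA, BETA)) -> plus(dx(ALPHA), dx(BETA))
print(weak_dp(2, ("dx", ("plus", "ALPHA", "BETA")),
                 ("plus", ("dx", "ALPHA"), ("dx", "BETA"))))
# (('dx^#', ('plus', 'ALPHA', 'BETA')),
#  ('c_2', ('dx^#', 'ALPHA'), ('dx^#', 'BETA')))
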
  
  The usable rules are:
   {}
   (no unmarked defined symbol occurs on the right-hand sides of the
    dependency pairs, so no rewrite rule is usable)
  
  The estimated dependency graph contains the following edges:
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(X) -> c_0()}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(X) -> c_0()}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(X) -> c_0()}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
     ==> {dx^#(X) -> c_0()}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(X) -> c_0()}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
     ==> {dx^#(X) -> c_0()}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(a()) -> c_1()}
   {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
     ==> {dx^#(X) -> c_0()}
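
  The estimated graph above is the complete graph on the nine pairs: every
  right-hand side applies dx^# only to plain variables, so an instance of any
  pair can be followed by an instance of any other pair (including the
  collapsing pairs c_0 and c_1).  The sketch below gives a coarse version of
  such a root-symbol estimation (Python; names are editorial assumptions and
  the "a variable matches anything" test is a simplification, not TcT's
  actual approximation).

def may_match(pattern, term):
    """Could some instance of term also be an instance of pattern?"""
    if isinstance(pattern, str) or isinstance(term, str):
        return True                          # a variable matches anything
    if pattern[0] != term[0] or len(pattern) != len(term):
        return False
    return all(may_match(p, t) for p, t in zip(pattern[1:], term[1:]))

# two representative pairs as (lhs, [arguments of dx^# on the rhs]):
dp_plus = (("dx^#", ("plus", "ALPHA", "BETA")), ["ALPHA", "BETA"])
dp_ln   = (("dx^#", ("ln", "ALPHA")), ["ALPHA"])

dps = [dp_plus, dp_ln]
edges = [(i, j) for i, (_, outs) in enumerate(dps)
                for j, (lhs, _) in enumerate(dps)
                if any(may_match(lhs[1], out) for out in outs)]
print(edges)     # [(0, 0), (0, 1), (1, 0), (1, 1)]: every pair is connected
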
  
  We consider the following path(s):
   1) {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
       , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
       , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
       , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
       , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
       , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
       , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
      
      The usable rules for this path are empty.
      
        We have oriented the usable rules with the following strongly linear interpretation:
          Interpretation Functions:
           dx(x1) = [0] x1 + [0]
           one() = [0]
           a() = [0]
           zero() = [0]
           plus(x1, x2) = [0] x1 + [0] x2 + [0]
           times(x1, x2) = [0] x1 + [0] x2 + [0]
           minus(x1, x2) = [0] x1 + [0] x2 + [0]
           neg(x1) = [0] x1 + [0]
           div(x1, x2) = [0] x1 + [0] x2 + [0]
           exp(x1, x2) = [0] x1 + [0] x2 + [0]
           two() = [0]
           ln(x1) = [0] x1 + [0]
           dx^#(x1) = [0] x1 + [0]
           c_0() = [0]
           c_1() = [0]
           c_2(x1, x2) = [0] x1 + [0] x2 + [0]
           c_3(x1, x2) = [0] x1 + [0] x2 + [0]
           c_4(x1, x2) = [0] x1 + [0] x2 + [0]
           c_5(x1) = [0] x1 + [0]
           c_6(x1, x2) = [0] x1 + [0] x2 + [0]
           c_7(x1) = [0] x1 + [0]
           c_8(x1, x2) = [0] x1 + [0] x2 + [0]
        
         We have applied the subprocessor to the resulting DP problem:
        
          'Weight Gap Principle'
          ----------------------
          Answer:           YES(?,O(n^1))
          Input Problem:    innermost DP runtime-complexity with respect to
            Strict Rules:
              {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
               , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
               , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
               , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
               , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
               , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
               , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
            Weak Rules: {}
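
           The Details below iterate the weight gap principle: each round
           picks a strongly linear interpretation that strictly orients at
           least one remaining strict pair and moves it to the weak rules,
           until no strict pair is left, at which point the 'Empty TRS'
           processor applies; the answer YES(?,O(n^1)) reflects the linear
           bound contributed by each round.  A minimal sketch of that
           iteration (rule labels are editorial shorthand) is:

strict = {"plus/c_2", "exp/c_8", "ln/c_7", "div/c_6",
          "neg/c_5", "minus/c_4", "times/c_3"}
weak = set()
# one strongly linear interpretation per round, in the order used below:
for oriented in ["neg/c_5", "times/c_3", "div/c_6", "minus/c_4",
                 "ln/c_7", "exp/c_8", "plus/c_2"]:
    strict.remove(oriented)
    weak.add(oriented)
assert not strict        # all pairs are weak; the 'Empty TRS' processor
                         # then closes the path with constant cost
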
          
          Details:         
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            and weakly orienting the rules
            {}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [0]
                  times(x1, x2) = [1] x1 + [1] x2 + [0]
                  minus(x1, x2) = [1] x1 + [1] x2 + [0]
                  neg(x1) = [1] x1 + [8]
                  div(x1, x2) = [1] x1 + [1] x2 + [0]
                  exp(x1, x2) = [1] x1 + [1] x2 + [0]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [1]
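
             As a sanity check of this interpretation on the strictly
             oriented pair (the later rounds can be checked analogously), the
             sketch below evaluates both sides; only the three interpretation
             functions involved are encoded, and the names are editorial.

def neg(x):      return 1 * x + 8      # neg(x1)  = [1] x1 + [8]
def dx_sharp(x): return 1 * x + 1      # dx^#(x1) = [1] x1 + [1]
def c_5(x):      return 1 * x + 0      # c_5(x1)  = [1] x1 + [0]

for alpha in range(100):               # the gap is the same for every ALPHA
    lhs = dx_sharp(neg(alpha))         # = ALPHA + 9
    rhs = c_5(dx_sharp(alpha))         # = ALPHA + 1
    assert lhs >= rhs + 1              # strict decrease, as required
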
              
             Next we apply the subprocessor:
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
            and weakly orienting the rules
            {dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [0]
                  times(x1, x2) = [1] x1 + [1] x2 + [8]
                  minus(x1, x2) = [1] x1 + [1] x2 + [0]
                  neg(x1) = [1] x1 + [2]
                  div(x1, x2) = [1] x1 + [1] x2 + [0]
                  exp(x1, x2) = [1] x1 + [1] x2 + [0]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [1]
              
             Next we apply the subprocessor:
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
            and weakly orienting the rules
            {  dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [0]
                  times(x1, x2) = [1] x1 + [1] x2 + [8]
                  minus(x1, x2) = [1] x1 + [1] x2 + [0]
                  neg(x1) = [1] x1 + [0]
                  div(x1, x2) = [1] x1 + [1] x2 + [8]
                  exp(x1, x2) = [1] x1 + [1] x2 + [0]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [1]
              
             Next we apply the subprocessor:
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
            and weakly orienting the rules
            {  dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
             , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [0]
                  times(x1, x2) = [1] x1 + [1] x2 + [8]
                  minus(x1, x2) = [1] x1 + [1] x2 + [8]
                  neg(x1) = [1] x1 + [0]
                  div(x1, x2) = [1] x1 + [1] x2 + [7]
                  exp(x1, x2) = [1] x1 + [1] x2 + [0]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [1]
              
             Next we apply the subprocessor:
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
            and weakly orienting the rules
            {  dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
             , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
             , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [0]
                  times(x1, x2) = [1] x1 + [1] x2 + [7]
                  minus(x1, x2) = [1] x1 + [1] x2 + [8]
                  neg(x1) = [1] x1 + [15]
                  div(x1, x2) = [1] x1 + [1] x2 + [8]
                  exp(x1, x2) = [1] x1 + [1] x2 + [0]
                  two() = [0]
                  ln(x1) = [1] x1 + [8]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [15]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_7(x1) = [1] x1 + [7]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [1]
              
             Next we apply the subprocessor:
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
            and weakly orienting the rules
            {  dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
             , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
             , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
             , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [0]
                  times(x1, x2) = [1] x1 + [1] x2 + [8]
                  minus(x1, x2) = [1] x1 + [1] x2 + [2]
                  neg(x1) = [1] x1 + [7]
                  div(x1, x2) = [1] x1 + [1] x2 + [8]
                  exp(x1, x2) = [1] x1 + [1] x2 + [8]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [3]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [1]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [3]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [6]
              
             Next we apply the subprocessor:
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
            and weakly orienting the rules
            {  dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
             , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
             , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
             , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
             , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [8]
                  times(x1, x2) = [1] x1 + [1] x2 + [8]
                  minus(x1, x2) = [1] x1 + [1] x2 + [2]
                  neg(x1) = [1] x1 + [1]
                  div(x1, x2) = [1] x1 + [1] x2 + [8]
                  exp(x1, x2) = [1] x1 + [1] x2 + [8]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [6]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [7]
              
             Finally we apply the subprocessor:
            'Empty TRS'
            -----------
            Answer:           YES(?,O(1))
            Input Problem:    innermost DP runtime-complexity with respect to
              Strict Rules: {}
              Weak Rules:
                {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
                 , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))}
            
            Details:         
               The given problem does not contain any strict rules.
      
   2) {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
       , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
       , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
       , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
       , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
       , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
       , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
       , dx^#(a()) -> c_1()}
      
      The usable rules for this path are empty.
      
        We have oriented the usable rules with the following strongly linear interpretation:
          Interpretation Functions:
           dx(x1) = [0] x1 + [0]
           one() = [0]
           a() = [0]
           zero() = [0]
           plus(x1, x2) = [0] x1 + [0] x2 + [0]
           times(x1, x2) = [0] x1 + [0] x2 + [0]
           minus(x1, x2) = [0] x1 + [0] x2 + [0]
           neg(x1) = [0] x1 + [0]
           div(x1, x2) = [0] x1 + [0] x2 + [0]
           exp(x1, x2) = [0] x1 + [0] x2 + [0]
           two() = [0]
           ln(x1) = [0] x1 + [0]
           dx^#(x1) = [0] x1 + [0]
           c_0() = [0]
           c_1() = [0]
           c_2(x1, x2) = [0] x1 + [0] x2 + [0]
           c_3(x1, x2) = [0] x1 + [0] x2 + [0]
           c_4(x1, x2) = [0] x1 + [0] x2 + [0]
           c_5(x1) = [0] x1 + [0]
           c_6(x1, x2) = [0] x1 + [0] x2 + [0]
           c_7(x1) = [0] x1 + [0]
           c_8(x1, x2) = [0] x1 + [0] x2 + [0]
        
         We have applied the subprocessor to the resulting DP problem:
        
          'Weight Gap Principle'
          ----------------------
          Answer:           YES(?,O(n^1))
          Input Problem:    innermost DP runtime-complexity with respect to
            Strict Rules: {dx^#(a()) -> c_1()}
            Weak Rules:
              {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
               , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
               , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
               , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
               , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
               , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
               , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
          
          Details:         
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(a()) -> c_1()}
            and weakly orienting the rules
            {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
             , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
             , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
             , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
             , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
             , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(a()) -> c_1()}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [8]
                  times(x1, x2) = [1] x1 + [1] x2 + [8]
                  minus(x1, x2) = [1] x1 + [1] x2 + [7]
                  neg(x1) = [1] x1 + [0]
                  div(x1, x2) = [1] x1 + [1] x2 + [8]
                  exp(x1, x2) = [1] x1 + [1] x2 + [8]
                  two() = [0]
                  ln(x1) = [1] x1 + [15]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [3]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [7]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [0]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [7]
              
             Finally we apply the subprocessor:
            'Empty TRS'
            -----------
            Answer:           YES(?,O(1))
            Input Problem:    innermost DP runtime-complexity with respect to
              Strict Rules: {}
              Weak Rules:
                {  dx^#(a()) -> c_1()
                 , dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
                 , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
                 , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
            
            Details:         
               The given problem does not contain any strict rules.
      
   3) {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
       , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
       , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
       , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
       , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
       , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
       , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))
       , dx^#(X) -> c_0()}
      
      The usable rules for this path are empty.
      
        We have oriented the usable rules with the following strongly linear interpretation:
          Interpretation Functions:
           dx(x1) = [0] x1 + [0]
           one() = [0]
           a() = [0]
           zero() = [0]
           plus(x1, x2) = [0] x1 + [0] x2 + [0]
           times(x1, x2) = [0] x1 + [0] x2 + [0]
           minus(x1, x2) = [0] x1 + [0] x2 + [0]
           neg(x1) = [0] x1 + [0]
           div(x1, x2) = [0] x1 + [0] x2 + [0]
           exp(x1, x2) = [0] x1 + [0] x2 + [0]
           two() = [0]
           ln(x1) = [0] x1 + [0]
           dx^#(x1) = [0] x1 + [0]
           c_0() = [0]
           c_1() = [0]
           c_2(x1, x2) = [0] x1 + [0] x2 + [0]
           c_3(x1, x2) = [0] x1 + [0] x2 + [0]
           c_4(x1, x2) = [0] x1 + [0] x2 + [0]
           c_5(x1) = [0] x1 + [0]
           c_6(x1, x2) = [0] x1 + [0] x2 + [0]
           c_7(x1) = [0] x1 + [0]
           c_8(x1, x2) = [0] x1 + [0] x2 + [0]
        
         We have applied the subprocessor to the resulting DP problem:
        
          'Weight Gap Principle'
          ----------------------
          Answer:           YES(?,O(n^1))
          Input Problem:    innermost DP runtime-complexity with respect to
            Strict Rules: {dx^#(X) -> c_0()}
            Weak Rules:
              {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
               , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
               , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
               , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
               , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
               , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
               , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
          
          Details:         
            We apply the weight gap principle, strictly orienting the rules
            {dx^#(X) -> c_0()}
            and weakly orienting the rules
            {  dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
             , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
             , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
             , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
             , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
             , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
             , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
            using the following strongly linear interpretation:
              Processor 'Matrix Interpretation' oriented the following rules strictly:
              
              {dx^#(X) -> c_0()}
              
              Details:
                 Interpretation Functions:
                  dx(x1) = [0] x1 + [0]
                  one() = [0]
                  a() = [0]
                  zero() = [0]
                  plus(x1, x2) = [1] x1 + [1] x2 + [8]
                  times(x1, x2) = [1] x1 + [1] x2 + [9]
                  minus(x1, x2) = [1] x1 + [1] x2 + [4]
                  neg(x1) = [1] x1 + [8]
                  div(x1, x2) = [1] x1 + [1] x2 + [8]
                  exp(x1, x2) = [1] x1 + [1] x2 + [8]
                  two() = [0]
                  ln(x1) = [1] x1 + [0]
                  dx^#(x1) = [1] x1 + [1]
                  c_0() = [0]
                  c_1() = [0]
                  c_2(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_3(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_4(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_5(x1) = [1] x1 + [0]
                  c_6(x1, x2) = [1] x1 + [1] x2 + [1]
                  c_7(x1) = [1] x1 + [0]
                  c_8(x1, x2) = [1] x1 + [1] x2 + [7]
              
             Finally we apply the subprocessor:
            'Empty TRS'
            -----------
            Answer:           YES(?,O(1))
            Input Problem:    innermost DP runtime-complexity with respect to
              Strict Rules: {}
              Weak Rules:
                {  dx^#(X) -> c_0()
                 , dx^#(plus(ALPHA, BETA)) -> c_2(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(exp(ALPHA, BETA)) -> c_8(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(ln(ALPHA)) -> c_7(dx^#(ALPHA))
                 , dx^#(div(ALPHA, BETA)) -> c_6(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(neg(ALPHA)) -> c_5(dx^#(ALPHA))
                 , dx^#(minus(ALPHA, BETA)) -> c_4(dx^#(ALPHA), dx^#(BETA))
                 , dx^#(times(ALPHA, BETA)) -> c_3(dx^#(ALPHA), dx^#(BETA))}
            
            Details:         
               The given problem does not contain any strict rules.